The Mechanism of Additive Composition
Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell
and Lapata, 2010) is a widely used method for computing meanings of phrases,
which takes the average of vector representations of the constituent words. In
this article, we prove an upper bound for the bias of additive composition,
which is the first theoretical analysis of compositional frameworks from a
machine learning point of view. The bound is written in terms of collocation
strength; we prove that the more exclusively two successive words tend to occur
together, the more accurately their additive composition can be guaranteed to
approximate the natural phrase vector. Our proof relies on properties of
natural language data that are empirically verified, and can be theoretically
derived from an assumption that the data is generated from a Hierarchical
Pitman-Yor Process. The theory endorses additive composition as a reasonable
operation for calculating meanings of phrases, and suggests ways to improve
additive compositionality, including: transforming entries of distributional
word vectors by a function that meets a specific condition, constructing a
novel type of vector representations to make additive composition sensitive to
word order, and utilizing singular value decomposition to train word vectors.
Comment: More explanations on theory and additional experiments added.
Accepted by Machine Learning Journal
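As a minimal illustration of the additive composition discussed above, the sketch below averages the distributional vectors of the constituent words to approximate a phrase vector. The toy vectors and names are purely illustrative and not taken from the paper.

```python
import numpy as np

# Toy distributional vectors; in practice these would come from a trained
# word-vector model (the values below are illustrative only).
word_vectors = {
    "machine":  np.array([0.9, 0.1, 0.3]),
    "learning": np.array([0.2, 0.8, 0.4]),
}

def additive_composition(words, vectors):
    """Approximate the phrase vector as the average of its constituent word vectors."""
    return np.mean([vectors[w] for w in words], axis=0)

phrase_vector = additive_composition(["machine", "learning"], word_vectors)
print(phrase_vector)  # element-wise average of the two word vectors
```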
Controlled Generation with Prompt Insertion for Natural Language Explanations in Grammatical Error Correction
In Grammatical Error Correction (GEC), it is crucial to ensure the user's
comprehension of the reasons for corrections. Existing studies present tokens,
examples, and hints as the basis for correction but do not directly explain
the reasons for corrections. Although methods that use Large Language Models
(LLMs) to provide direct explanations in natural language have been proposed
for various tasks, no such method exists for GEC. Generating explanations for
GEC corrections involves aligning input and output tokens, identifying
correction points, and presenting corresponding explanations consistently.
However, it is not straightforward to specify a complex format to generate
explanations, because explicit control of generation is difficult with prompts.
This study introduces a method called controlled generation with Prompt
Insertion (PI) so that LLMs can explain the reasons for corrections in natural
language. In PI, LLMs first correct the input text, and then we automatically
extract the correction points using rules. The extracted correction
points are sequentially inserted into the LLM's explanation output as prompts,
guiding the LLMs to generate explanations for the correction points. We also
create an Explainable GEC (XGEC) dataset of correction reasons by annotating
NUCLE, CoNLL2013, and CoNLL2014. Although generations from GPT-3 and ChatGPT
using original prompts miss some correction points, generation control with PI
can explicitly guide the LLMs to describe explanations for all correction
points, contributing to improved performance in generating correction reasons.
Comment: Work in progress
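The following sketch illustrates the general idea of Prompt Insertion as described in the abstract: correct the input with an LLM, extract correction points by aligning source and corrected tokens, and insert them one by one into the explanation prompt. The function call_llm is a hypothetical placeholder, and the rule-based alignment via difflib is an assumption for illustration, not the paper's implementation.

```python
import difflib

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call (e.g., GPT-3 or ChatGPT); replace with a real API client."""
    raise NotImplementedError

def extract_correction_points(source: str, corrected: str):
    """Rule-based extraction of correction points by aligning source and corrected tokens."""
    src, tgt = source.split(), corrected.split()
    points = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, src, tgt).get_opcodes():
        if tag != "equal":  # keep only spans that were actually corrected
            points.append((" ".join(src[i1:i2]), " ".join(tgt[j1:j2])))
    return points

def explain_with_prompt_insertion(source: str) -> str:
    # Step 1: the LLM corrects the input text.
    corrected = call_llm("Correct the grammatical errors in the text:\n" + source)
    # Step 2: correction points are extracted automatically from the alignment.
    points = extract_correction_points(source, corrected)
    # Step 3: insert each correction point into the prompt in turn, so the model
    # is guided to explain every correction instead of skipping some.
    prompt = f"Source: {source}\nCorrected: {corrected}\n"
    for before, after in points:
        prompt += f'Explain the correction of "{before}" to "{after}":\n'
        prompt += call_llm(prompt).strip() + "\n"
    return prompt
```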
Reducing Sequence Length by Predicting Edit Operations with Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance in
various tasks and gained significant attention. LLMs are also used for local
sequence transduction tasks, including grammatical error correction (GEC) and
formality style transfer, where most tokens in a source text are kept
unchanged. However, it is inefficient to generate all target tokens because a
prediction error of a target token may cause a catastrophe in predicting
subsequent tokens and because the computational cost grows quadratically with
the target sequence length. This paper proposes to predict a set of edit
operations for the source text for local sequence transduction tasks.
By representing an edit operation as a span of the source text and the changed
tokens, we can reduce the length of the target sequence and thus the
computational cost of inference. We apply instruction tuning to LLMs on the
supervision data of edit operations. Experiments show that the proposed method
achieves comparable performance to the baseline in four tasks (paraphrasing,
formality style transfer, GEC, and text simplification), despite reducing the
length of the target text by as small as 21%. Furthermore, we report that
instruction tuning with the proposed method achieved state-of-the-art
performance in the four tasks.
Comment: Work in progress
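A minimal sketch of the edit-operation representation described above, assuming a simple token alignment with Python's difflib: each edit is a (source span, changed tokens) pair, so a model only needs to output the edits rather than the full target sequence. The helper names are illustrative, not the paper's code.

```python
import difflib

def to_edit_operations(source_tokens, target_tokens):
    """Represent the target as a list of (source span, changed tokens) edit operations."""
    ops = []
    matcher = difflib.SequenceMatcher(None, source_tokens, target_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # keep only spans that actually change
            ops.append(((i1, i2), target_tokens[j1:j2]))
    return ops

def apply_edit_operations(source_tokens, ops):
    """Reconstruct the full target text from the source and the edit operations."""
    out, prev = [], 0
    for (i1, i2), replacement in ops:
        out.extend(source_tokens[prev:i1])  # copy unchanged tokens
        out.extend(replacement)             # apply the edit
        prev = i2
    out.extend(source_tokens[prev:])
    return out

src = "He go to school yesterday".split()
tgt = "He went to school yesterday".split()
ops = to_edit_operations(src, tgt)              # [((1, 2), ['went'])]
assert apply_edit_operations(src, ops) == tgt   # only the edits need to be predicted
```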
The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated
Pre-trained language models trained on large-scale data have learned serious
levels of social biases. Consequently, various methods have been proposed to
debias pre-trained models. Debiasing methods need to mitigate only the
discriminatory bias information in the pre-trained models, while retaining
information that is useful for the downstream tasks. In previous research,
whether useful information is retained has been confirmed by the performance of
debiased pre-trained models on downstream tasks. However, it is not
clear whether these benchmarks consist of data pertaining to social biases and
are appropriate for investigating the impact of debiasing. For example, for
gender-related social biases, data containing female words (e.g., "she, female,
woman"), male words (e.g., "he, male, man"), and stereotypical words (e.g.,
"nurse, doctor, professor") are considered to be the most affected by
debiasing. If there is not much data containing these words in a benchmark
dataset for a target task, there is the possibility of erroneously evaluating
the effects of debiasing. In this study, we compare the impact of debiasing on
performance across multiple downstream tasks using a wide range of benchmark
datasets containing female, male, and stereotypical words. Experiments
show that the effects of debiasing are consistently underestimated
across all tasks. Moreover, the effects of debiasing can be evaluated more
reliably by separately considering instances containing female, male, and
stereotypical words than by considering all of the instances in a benchmark
dataset.
Comment: IJCNLP-AACL 2023
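The evaluation protocol suggested above can be sketched as follows: split a benchmark into subsets whose instances contain female, male, or stereotypical words, and report performance on each subset separately alongside the overall score. The word lists and instance format here are hypothetical simplifications, not the paper's exact setup.

```python
# Hypothetical word lists and instance format; the study uses its own lists and benchmarks.
FEMALE_WORDS = {"she", "her", "female", "woman"}
MALE_WORDS = {"he", "his", "male", "man"}
STEREOTYPE_WORDS = {"nurse", "doctor", "professor"}

def subset_accuracy(instances, word_set):
    """Accuracy over the instances that contain at least one word from word_set."""
    hits = [inst["correct"] for inst in instances
            if word_set & set(inst["text"].lower().split())]
    return sum(hits) / len(hits) if hits else None

def evaluate_debiasing_impact(instances):
    """Report per-subset scores alongside the overall score instead of a single average."""
    return {
        "overall":    sum(i["correct"] for i in instances) / len(instances),
        "female":     subset_accuracy(instances, FEMALE_WORDS),
        "male":       subset_accuracy(instances, MALE_WORDS),
        "stereotype": subset_accuracy(instances, STEREOTYPE_WORDS),
    }

# Each instance is a dict like {"text": "...", "correct": True/False} obtained by
# running a (debiased) model on a downstream benchmark.
```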